NetSentry: A deep learning approach to detecting incipient large-scale network attacks
Machine Learning (ML) techniques are increasingly adopted to tackle
ever-evolving high-profile network attacks, including DDoS, botnet, and
ransomware, due to their unique ability to extract complex patterns hidden in
data streams. However, these approaches are routinely validated with data
collected in the same environment, and, as we uncover, their performance
degrades when they are deployed in different network topologies and/or applied
to previously unseen traffic. This suggests that malicious/benign behaviors are
largely learned superficially, and that ML-based Network Intrusion Detection
Systems (NIDSs) need revisiting to be effective in practice. In this paper we
dive into the
mechanics of large-scale network attacks, with a view to understanding how to
use ML for Network Intrusion Detection (NID) in a principled way. We reveal
that, although cyberattacks vary significantly in terms of payloads, vectors
and targets, their early stages, which are critical to successful attack
outcomes, share many similarities and exhibit important temporal correlations.
Therefore, we treat NID as a time-sensitive task and propose NetSentry, perhaps
the first NIDS of its kind, built on the Bidirectional Asymmetric LSTM
(Bi-ALSTM), an original ensemble of sequential neural models, to detect network
threats before they spread. We cross-evaluate NetSentry using two practical
datasets, training on one and testing on the other, and demonstrate F1 score
gains above 33% over the state-of-the-art, as well as up to 3 times higher
rates of detecting attacks such as XSS and web brute-force. Further, we put
forward a novel data augmentation technique that boosts the generalization
abilities of a broad range of supervised deep learning algorithms, leading to
average F1 score gains above 35%.
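To illustrate the time-sensitive framing of NID, below is a minimal sketch of a
bidirectional LSTM classifier over per-flow feature sequences. This is not the
paper's Bi-ALSTM ensemble; the feature dimension, sequence length and class
count are hypothetical.

import torch
import torch.nn as nn

# Bidirectional LSTM over a sequence of per-interval flow features;
# the output at the last time step feeds a benign/malicious classifier.
class BiLSTMFlowClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, time, n_features)
        out, _ = self.lstm(x)            # (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])  # score the flow at the last step

model = BiLSTMFlowClassifier()
flows = torch.randn(8, 20, 40)           # 8 flows, 20 time steps each
print(model(flows).shape)                # torch.Size([8, 2])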
Max-Min Fair Resource Allocation in Millimetre-Wave Backhauls
5G mobile networks are expected to provide pervasive high speed wireless
connectivity, to support increasingly resource intensive user applications.
Network hyper-densification therefore becomes necessary, though connecting tens
of thousands of base stations to the Internet is non-trivial, especially in
urban scenarios where optical fibre is difficult and costly to deploy. The
millimetre wave (mm-wave) spectrum is a promising candidate for inexpensive
multi-Gbps wireless backhauling, but exploiting this band for effective
multi-hop data communications is challenging. In particular, resource
allocation and scheduling of very narrow transmission/reception beams require
overcoming terminal deafness and link blockage problems, while managing
fairness issues that arise when flows encounter dissimilar competition and
traverse different numbers of links with heterogeneous quality. In this paper,
we propose WiHaul, an airtime allocation and scheduling mechanism that
overcomes these challenges specific to multi-hop mm-wave networks, guarantees
max-min fairness among traffic flows, and ensures the overall available
backhaul resources are fully utilised. We evaluate the proposed WiHaul scheme
over a broad range of practical network conditions, and demonstrate up to 5
times higher individual throughput and a fivefold improvement in measurable
fairness over recent mm-wave scheduling solutions.
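For intuition on the fairness objective, the following progressive-filling
sketch computes a max-min fair allocation on a single shared resource. WiHaul
itself must additionally respect per-link airtime budgets along multi-hop
routes; the capacity and demand figures here are hypothetical.

def max_min_fair(capacity, demands):
    """Progressive filling: return max-min fair rates for the given demands."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active:
        share = remaining / len(active)
        # Flows that need less than the equal share are fully satisfied.
        sated = {i for i in active if demands[i] - alloc[i] <= share}
        if not sated:
            for i in active:            # everyone gets the equal share
                alloc[i] += share
            break
        for i in sated:                 # freeze satisfied flows
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
        active -= sated
    return alloc

print(max_min_fair(10.0, [2.0, 4.0, 8.0]))  # [2.0, 4.0, 4.0]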
Maximising the Utility of Enterprise Millimetre-Wave Networks
Millimetre-wave (mmWave) technology is a promising candidate for meeting the
intensifying demand for ultra-fast wireless connectivity, especially in
high-end enterprise networks. Very narrow beamforming is mandatory to mitigate
the severe attenuation specific to the extremely high frequency (EHF) bands
exploited. While this greatly reduces interference, it also renders links prone
to problematic communication blockages. As a consequence, client association
control and scheduling in scenarios with densely deployed mmWave access points
become particularly challenging, while policies designed for traditional
wireless networks remain inappropriate. In this paper we formulate and solve
these tasks as utility maximisation problems under different traffic regimes,
for the first time in the mmWave context. We specify a set of low-complexity
algorithms that capture distinctive terminal deafness and user demand
constraints, while providing near-optimal client associations and airtime
allocations, despite the problems' inherent NP-completeness. To evaluate our
solutions, we develop an NS-3 implementation of the IEEE 802.11ad protocol,
which we construct upon preliminary 60GHz channel measurements. Simulation
results demonstrate that our schemes provide up to 60% higher throughput
compared to the commonly used signal-strength-based association policy for
mmWave networks, and outperform recently proposed load-balancing oriented
solutions, as we accommodate the demand of 33% more clients in both static and
mobile scenarios.
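As a toy illustration of utility-driven association, the greedy sketch below
assigns each client to the access point that most increases aggregate
log-throughput under equal airtime sharing. This simple heuristic is not the
paper's near-optimal algorithms, and the PHY rates are hypothetical.

import math

def association_utility(assignment, rates):
    """Sum of log throughputs, assuming equal airtime sharing at each AP."""
    total = 0.0
    for ap, clients in assignment.items():
        for c in clients:
            total += math.log(rates[c][ap] / len(clients))
    return total

rates = {  # achievable PHY rate (Mb/s) of each client towards each AP
    "c1": {"ap1": 4620, "ap2": 1155},
    "c2": {"ap1": 2310, "ap2": 4620},
    "c3": {"ap1": 4620, "ap2": 4620},
}
assignment = {"ap1": [], "ap2": []}
for client in rates:
    # Pick the AP whose tentative assignment yields the highest utility.
    best = max(assignment, key=lambda ap: association_utility(
        {a: cl + [client] if a == ap else cl
         for a, cl in assignment.items()}, rates))
    assignment[best].append(client)
print(assignment)  # {'ap1': ['c1', 'c3'], 'ap2': ['c2']}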
Dead on Arrival: An Empirical Study of The Bluetooth 5.1 Positioning System
The recently released Bluetooth 5.1 specification introduces fine-grained
positioning capabilities in this wireless technology, which is deemed essential
to context-/location-based Internet of Things (IoT) applications. In this
paper, we evaluate experimentally, for the first time, the accuracy of a
positioning system based on the Angle of Arrival (AoA) mechanism adopted by the
Bluetooth standard. We first scrutinize the fidelity of angular detection and
then assess the feasibility of using angle information from multiple fixed
receivers to determine the position of a device. Our results reveal that
angular detection is reliable only within a restricted angular range. On the
other hand, even in
a simple deployment with only two antennas per receiver, the AoA-based
positioning technique can achieve sub-meter accuracy; yet attaining
localization within a few centimeters remains a difficult endeavor. We then
demonstrate that a malicious device may be able to easily alter the
truthfulness of the measured AoA, by tampering with the packet structure. To
counter this protocol weakness, we propose simple remedies that are missing in
the standard, but which can be adopted with little effort by manufacturers, to
secure the Bluetooth 5.1 positioning system.
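For reference, the sketch below shows how a two-antenna receiver maps the
inter-antenna phase difference of an incoming signal to an angle of arrival,
the principle underlying Bluetooth 5.1 AoA. The antenna spacing and phase
values are hypothetical.

import math

C = 3e8                      # speed of light, m/s
F = 2.402e9                  # BLE advertising channel 37 frequency, Hz
WAVELENGTH = C / F           # roughly 0.125 m

def aoa_degrees(delta_phase_rad, spacing_m):
    """Angle of arrival from the phase difference between two antennas."""
    s = delta_phase_rad * WAVELENGTH / (2 * math.pi * spacing_m)
    if abs(s) > 1:
        raise ValueError("phase difference inconsistent with spacing")
    return math.degrees(math.asin(s))

# Half-wavelength spacing; a pi/3 phase shift maps to about 19.5 degrees.
print(round(aoa_degrees(math.pi / 3, WAVELENGTH / 2), 1))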
Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning
Embedding covert streams into a cover channel is a common approach to
circumventing Internet censorship, due to censors' inability to examine
encrypted information in otherwise permitted protocols (Skype, HTTPS, etc.).
However, recent advances in machine learning (ML) enable detecting a range of
anti-censorship systems by learning distinct statistical patterns hidden in
traffic flows. It is therefore difficult to design obfuscation solutions that
generate traffic statistically similar to innocuous network activity, so as to
deceive ML-based classifiers at line speed.
In this paper, we formulate a practical adversarial attack strategy against
flow classifiers as a method for circumventing censorship. Specifically, we
cast the problem of finding adversarial flows that will be misclassified as a
sequence generation task, which we solve with Amoeba, a novel reinforcement
learning algorithm that we design. Amoeba interacts with censoring classifiers
without any knowledge of their model structure: it crafts packets and observes
the classifiers' decisions, using these to guide the
sequence generation process. Our experiments using data collected from two
popular anti-censorship systems demonstrate that Amoeba can effectively shape
adversarial flows that have on average 94% attack success rate against a range
of ML algorithms. In addition, we show that these adversarial flows are robust
in different network environments and possess transferability across various ML
models, meaning that once trained against one, our agent can subvert other
censoring classifiers without retraining.
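To convey the black-box interaction loop, here is a deliberately simplified
sketch in which a bandit-style agent learns a packet-size perturbation that
evades a stand-in classifier. The agent, features and classifier are all
hypothetical, and far simpler than Amoeba's sequence-generation RL.

import random

ACTIONS = [-200, -50, 0, 50, 200]        # packet-size deltas (bytes)

def censor(flow):
    """Stand-in black-box classifier: blocks flows of large packets."""
    return sum(flow) / len(flow) > 800   # True = flow blocked

q = {a: 0.0 for a in ACTIONS}            # estimated value of each action
for episode in range(2000):
    action = (random.choice(ACTIONS) if random.random() < 0.1
              else max(q, key=q.get))    # epsilon-greedy choice
    flow = [max(64, 950 + action + random.randint(-50, 50))
            for _ in range(10)]          # craft a 10-packet flow
    reward = 0.0 if censor(flow) else 1.0  # reward = evaded the censor
    q[action] += 0.05 * (reward - q[action])
print(max(q, key=q.get))                 # typically learns -200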